Project - Rolls, sparseness and storing patterns in neural networks

Greg Detre

Wednesday, 14 March, 2001

Prof. Rolls' lab, EP

 

Bibliography

Rolls & Treves - Neural Networks and Brain Function

 

Discussions

14/3/2001 - Prof Rolls, Simon and Thomas

Reading

Glossary

Handy commands

cc -std1 -o blackwell.out blackwell.c -lm

alias aliasname='command'

blackwell 0.2 2 0.5 0.05 10000 0 0.02 ./devdir/cortmp ./devdir/inftmp ./devdir/sptmp < infile|quikfile

/np/src/nn/blackwell2/; /np/include/display

Questions

blackwell.c

controlling the sparseness is not biologically plausible, so this is just for statistical purposes, right???

is the hypothesis that the hippocampus and other areas use a low-graded signalling in order to store a relatively small number of memories with excellent retrieval???

what does patgen do??? what are the patterns???

don't we lose synchronicity effects if we're just using rates rather than individual spikes???

what's a Kappa constant???

what's the difference between w[][] and the connection matrix???

what does createkey() do - what's the key???

how are we going to modify update to check all the neurons at once???

 

Rolls, Treves et al. 1997 paper on attractor neural networks (sparseness etc.)

gain factor g in the threshold linear activation function???
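
Partial answer - the threshold-linear function is zero below the threshold and rises linearly with slope g above it, so g just sets how steeply the firing rate grows with the activation. Rough sketch in C (my own variable names, not taken from blackwell.c):

/* threshold-linear activation: 0 below threshold, slope g above it */
double threshold_linear(double h, double threshold, double g)
{
    return (h > threshold) ? g * (h - threshold) : 0.0;
}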

what's the difference between 'sparseness' and 'mean activation of the net'??? is sparseness simply a measure of how many neurons are simultaneously active - how does that help???

how do you talk about sparseness in a non-binary net, since all the neurons are always active to a greater or lesser degree???

sparseness = for binary, mean activation = for non-binary…???
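
Partial answer to the sparseness questions above - the Treves-Rolls measure is a = (sum of rates / N)^2 / (sum of squared rates / N), which for 0/1 patterns reduces to the fraction of neurons that are active, and stays well-defined when firing is graded. Quick sketch (my own code, not from blackwell.c):

/* Treves-Rolls sparseness: a = (mean rate)^2 / (mean squared rate) */
/* for 0/1 patterns this is just the fraction of active neurons     */
double sparseness(const double *rate, int n)
{
    double sum = 0.0, sumsq = 0.0;
    int i;
    for (i = 0; i < n; i++) {
        sum   += rate[i];
        sumsq += rate[i] * rate[i];
    }
    if (sumsq == 0.0)
        return 0.0;    /* all-silent pattern */
    return (sum / n) * (sum / n) / (sumsq / n);
}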

what's the difference between connectivity and sparseness???

in the brain, how many levels can the neurons distinguish??? if you talk in terms of APs instead of firing rates, isn't all this meaningless???

what does < h > mean???

what is 'critical loading'???

is it when any more input causes all the patterns to break down???

asynchronous updating means picking neurons at random to update, doesn't it??? does every neuron get picked the same number of times??? is it the same order each time??? is it one neuron per time step, or the whole network per time step???
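
My current understanding (needs checking against blackwell.c) - asynchronous updating means updating one unit at a time, each update seeing the latest rates of all the others; a common convention is a random-order sweep, so every unit is updated exactly once per sweep but in a fresh order each time. Sketch of that convention:

#include <stdlib.h>

/* one asynchronous sweep: each unit updated once, in a fresh random order */
/* (some implementations instead draw units with replacement)              */
void async_sweep(double *h, double *rate, double **w, int n,
                 double threshold, double g)
{
    int i, j, k, tmp;
    int *order = malloc(n * sizeof(int));
    for (i = 0; i < n; i++)
        order[i] = i;
    for (i = n - 1; i > 0; i--) {              /* Fisher-Yates shuffle */
        j = rand() % (i + 1);
        tmp = order[i]; order[i] = order[j]; order[j] = tmp;
    }
    for (k = 0; k < n; k++) {
        i = order[k];
        h[i] = 0.0;
        for (j = 0; j < n; j++)                /* field from the latest rates */
            h[i] += w[i][j] * rate[j];
        rate[i] = (h[i] > threshold) ? g * (h[i] - threshold) : 0.0;
    }
    free(order);
}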

direct perforant path???

where is the entorhinal system???

recurrent collaterals???

why aren't the graphs perfect???

Misc

pattern associator vs autoassociator???

is the cerebellum a good example of an actual pattern associator in the brain???

can you get horribly complicated higher-dimensional pattern associators???

NNBF

what are the 3 types of network architecture discussed in NNBF???

is there a typo in NNBF about LTP/LTD???

My program

what should happen to the activation after it fires???

the activation function for a real neuron is a Heaviside function, isn't it???

does the Hebb rule incorporate LTD, as well???
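
Note to self - the plain Hebb rule dw = k * post * pre only ever increases weights, so by itself it has no LTD; one standard way to build LTD in is to subtract the mean rates (a covariance-style rule), so the weight drops when either neuron fires below its average. Sketch of that variant (my own guess, not necessarily the rule NNBF uses):

/* store one pattern in an autoassociator with a covariance-style rule */
/* dw goes negative (LTD) when either rate is below the mean           */
void store_pattern(double **w, const double *rate, int n,
                   double k, double mean_rate)
{
    int i, j;
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            if (i != j)                        /* no self-connections */
                w[i][j] += k * (rate[i] - mean_rate) * (rate[j] - mean_rate);
}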

how many neurons in pattern associator??? 2 sets of inputs, right???

is there a generic super-net that reduces to the different types in special cases???

is there any way that back-propagation might ever be plausible???

is the delta rule a form of back-propagation???
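
Note to self - the delta rule for a single output unit is dw_i = alpha * (target - output) * x_i; back-propagation uses the same error-correcting idea but pushes the error back through hidden layers with the chain rule, so the delta rule is really the single-layer special case rather than back-propagation itself. Sketch:

/* delta rule for one output unit: dw_i = alpha * (target - output) * x_i */
void delta_rule_update(double *w, const double *x, int n,
                       double target, double output, double alpha)
{
    int i;
    double err = target - output;
    for (i = 0; i < n; i++)
        w[i] += alpha * err * x[i];
}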

how do I incorporate temporal summation???
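
One possible way (a modelling guess, not from the lab code) - let each unit's activation decay towards zero rather than reset, so inputs arriving on successive time steps add up:

/* leaky integration: activation decays by lambda (0..1) each step, */
/* so closely-spaced inputs summate temporally                      */
double integrate_step(double h_prev, double input, double lambda)
{
    return lambda * h_prev + input;
}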